An Empirical Study of Dynamic Variable Ordering Heuristics for the Constraint Satisfaction Problem
Abstract
The constraint satisfaction community has developed a number of heuristics for variable ordering during backtracking search. For example, in conjunction with algorithms which check forwards, the Fail-First (FF) and Brelaz (Bz) heuristics are cheap to evaluate and are generally considered to be very effective. Recent work to understand phase transitions in NP-complete problem classes enables us to compare such heuristics over a large range of different kinds of problems. Furthermore, we are now able to start to understand the reasons for the success, and therefore also the failure, of heuristics, and to introduce new heuristics which achieve the successes and avoid the failures. In this paper, we present a comparison of the Bz and FF heuristics in forward checking algorithms applied to randomly generated binary CSPs. We also introduce new and very general heuristics and present an extensive study of these. These new heuristics are usually as good as or better than Bz and FF, and we identify problem classes where our new heuristics can be orders of magnitude better. The result is a deeper understanding of what helps heuristics to succeed or fail on hard random problems in the context of forward checking, and the identification of promising new heuristics worthy of further investigation.

1 Introduction

In the constraint satisfaction problem (CSP) we are to assign values to variables such that a set of constraints is satisfied, or show that no satisfying assignment exists. This may be done via a systematic search process, such as depth-first search with backtracking, and this amounts to a sequence of decisions, where a decision is a choice of variable and value to assign to that variable. The order in which decisions are made can have a profound effect on search effort. Dechter and Meiri's study of preprocessing techniques [3] shows that dynamic search rearrangement (DSR), i.e. a variable ordering heuristic that selects as next variable the one with the minimal number of values in its domain, dominated all the other (static) orderings. Here, we present three new dynamic variable ordering (dvo) heuristics, derived as a result of our studies of phase transition phenomena in combinatorial problems, and compare these against two existing heuristics. Tsang, Borrett, and Kwan's study of CSP algorithms [21] shows that there does not appear to be a universally best algorithm, and that certain algorithms may be preferred under certain circumstances. We carry out a similar investigation with respect to variable ordering heuristics in an attempt to determine under what conditions one heuristic dominates another.

In the next section we give a background to the study, defining the CSP and the algorithm that we use as a vehicle for investigating the heuristics. We then go on to describe four possible measures of the constrainedness of CSPs, and in the next section describe five heuristics based on these measures. The empirical study is reported in Section 5, and conclusions are drawn in Section 6.

2 Background

A constraint satisfaction problem consists of a set of variables $V$, each variable $v \in V$ having a domain of values $M_v$ of size $m_v$, and a set of constraints $C$. Each constraint $c \in C$ of arity $a$ restricts a tuple of variables $\langle V_1, \ldots, V_a \rangle$, and specifies a subset of $M_1 \times M_2 \times \cdots \times M_a$, each element of which is a combination of values the variables are forbidden to take simultaneously by this constraint. In a binary CSP, which the experiments reported here are exclusively concerned with, the constraints are all of arity 2.
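To make the definitions concrete, here is a minimal sketch of a binary CSP in Python. The paper gives no code; the `BinaryCSP` name and the conflict-set encoding are our illustrative choices, not the authors'.

```python
# A minimal, illustrative binary CSP: each constraint is stored as the set
# of forbidden value pairs (conflicts) between two variables, matching the
# definition above.
class BinaryCSP:
    def __init__(self, domains, conflicts):
        # domains:   dict, variable -> set of values (the M_v)
        # conflicts: dict, ordered pair (v, w) with v < w -> set of
        #            forbidden value pairs {(x, y), ...}
        self.domains = domains
        self.conflicts = conflicts

    def consistent(self, v, x, w, y):
        """One consistency check: are v=x and w=y compatible?"""
        key, pair = ((v, w), (x, y)) if v < w else ((w, v), (y, x))
        return pair not in self.conflicts.get(key, set())

# Example: two variables over {0, 1, 2}, one constraint forbidding equality.
csp = BinaryCSP(
    domains={'a': {0, 1, 2}, 'b': {0, 1, 2}},
    conflicts={('a', 'b'): {(0, 0), (1, 1), (2, 2)}},
)
assert csp.consistent('a', 0, 'b', 1) and not csp.consistent('a', 2, 'b', 2)
```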
A solution to a CSP is an assignment of a value to every variable satisfying all the constraints. The problem that we address here is the decision problem, i.e. finding one solution or showing that none exists, and we do this via search. Broadly speaking, there are two classes of complete search algorithm for the CSP, namely those that check backwards and those that check forwards. In an algorithm that checks backwards, the current variable $v_i$ is instantiated and checking takes place against the (past) instantiated variables. If the instantiation is inconsistent then a new value is tried, and if no values remain then a past variable is reinstantiated. In algorithms that check forwards, the current variable is instantiated with a value and the (future) uninstantiated variables are made consistent, to some degree, with respect to that instantiation. Chronological backtracking (BT), backmarking (BM), backjumping (BJ), conflict-directed backjumping (CBJ), and dynamic backtracking (DB) might all be considered as algorithms that check backwards [11, 5, 6, 13, 10], whereas forward checking (FC) and maintaining arc-consistency (MAC) might be considered as algorithms that check forwards [12, 17]. This study investigates only forward checking algorithms, and in particular FC-CBJ. Algorithm FC instantiates variable $v_i$ with a value $x_i$ and removes from the domains of future variables any values that are inconsistent with respect to the current instantiation of $v_i$. If the instantiation results in no values remaining in the domain of a future variable, then a new value is tried for $v_i$, and if no values remain for $v_i$ (i.e. a dead end is reached) then the previous variable is reinstantiated (i.e. chronological backtracking takes place). FC-CBJ differs from FC in that, on reaching a dead end, the algorithm jumps back to a variable that is involved in a conflict with the current variable [14].
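The filtering step of FC might be sketched as follows, reusing the illustrative `BinaryCSP` representation above. The `forward_check` and `undo` names are hypothetical helpers; a real FC-CBJ implementation would additionally maintain the conflict information needed for its back-jumps.

```python
# A sketch of FC's filtering step: after assigning value x to variable v,
# prune values of future variables inconsistent with v=x, recording the
# prunings so they can be undone on backtracking. Returns False on a
# domain wipe-out (a dead end at some future variable).
def forward_check(csp, v, x, future):
    pruned = []  # (variable, value) pairs removed by this instantiation
    for w in future:
        for y in list(csp.domains[w]):
            if not csp.consistent(v, x, w, y):  # one consistency check
                csp.domains[w].discard(y)
                pruned.append((w, y))
        if not csp.domains[w]:  # wipe-out: try another value for v
            return pruned, False
    return pruned, True

def undo(csp, pruned):
    for w, y in pruned:
        csp.domains[w].add(y)
```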
In selecting an algorithm we will prefer one that takes less search effort than another, where search effort is measured as the number of times pairs of values are compared for compatibility, i.e. consistency checks. Generally, checking forwards reduces search effort, as does jumping back. The order in which variables are chosen for instantiation profoundly influences search effort. We might think of the instantiation order as a trajectory through the state space, and if solutions (assuming they exist) are not uniformly distributed throughout the state space, then some trajectories will be shorter than others. Consequently, a good instantiation order should lead to reduced search effort. Algorithms that check backwards tend to use variable ordering heuristics that exploit topological parameters, such as width, induced width or bandwidth, and correspond to static instantiation orders (i.e. they do not change during search) [20]. Algorithms that check forwards have additional information at their disposal, such as the current size of the domains of variables. Furthermore, since domain sizes may vary during the search process, forward checking algorithms may use dynamic variable ordering (dvo) heuristics [16], and it is this class of heuristics that is investigated here.

3 Constrainedness

Many NP-complete problems like constraint satisfaction display a rapid transition in solubility as we increase the constrainedness of problem instances. This phase transition is associated with problems which are typically hard to solve [2]. Problems that are under-constrained tend to have many solutions; it is therefore usually easy to guess one of the solutions. Problems that are over-constrained tend not to have any solutions, as there are many constraints, and any possible solution is usually quickly ruled out. A phase transition occurs in between, where problems are "critically constrained". Such problems are usually difficult to solve as they are neither obviously soluble nor insoluble. Problems from the phase transition are now routinely used to benchmark CSP and satisfiability procedures [21, 4, 7]. Constrainedness can be used both to predict the position of a phase transition in solubility [22, 19, 15, 8] and, as we show in the next section, to motivate the construction of heuristics.

In this section, we identify four measures, each of which computes some aspect of the constrainedness of an ensemble of random problems. Such measures may suggest whether an individual problem from the ensemble is likely to be soluble. For example, a problem with larger domain sizes or looser constraints is more likely to be soluble than a problem with smaller domains or tighter constraints, all else being equal. To make computing such measures tractable, we will ignore specific features of problems (like the topology of the constraint graph) and consider just simple properties like domain sizes and average constraint tightness.

One simple measure of constrainedness can be derived from the size of problems in the ensemble. Size is determined by both the number of variables and their domain sizes. Following [8, 9], we compute the size of problems via the size of the state space being explored. As the state space consists of all possible assignments of values to variables, its size is simply the product of the domain sizes, $\prod_{v \in V} m_v$. We define the size of the problem, $N$, as the logarithm base 2 of the size of the state space. This is the number of bits needed to describe a point in the state space:

    $N =_{def} \sum_{v \in V} \log_2 m_v$    (1)

As noted above, all else being equal, a large problem is likely to be less constrained and has a greater chance of being soluble than a small problem with the same number of variables.

A second measure of constrainedness is the solution density of the ensemble. If the constraint $c$ on average rules out a fraction $p_c$ of possible assignments, then a fraction $1 - p_c$ of assignments are allowed. The average solution density, $\rho$, is the average fraction of assignments allowed by all the constraints. Assuming independence between constraints, the average solution density over the ensemble is

    $\rho = \prod_{c \in C} (1 - p_c)$    (2)

Problems with loose constraints have high solution density. As noted above, all else being equal, a problem with a high solution density is more likely to be soluble than a problem with a low solution density.

A third measure of constrainedness is derived from the size and solution density. $E(N)$, the expected number of solutions for a problem within an ensemble, is simply the size of the state space times the probability that a given element in the state space is a solution. That is,

    $E(N) = 2^N \rho = \prod_{v \in V} m_v \prod_{c \in C} (1 - p_c)$    (3)

If problems in an ensemble are expected to have a large number of solutions, then an individual problem within the ensemble is likely to be loosely constrained and to have many solutions.
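These three measures translate directly into code. A minimal sketch, assuming `domains` maps each variable to its current domain and `tightness` maps each constraint to its tightness $p_c$ (both names are ours):

```python
import math

def size_N(domains):
    # Equation (1): N = sum over variables of log2(m_v)
    return sum(math.log2(len(dom)) for dom in domains.values())

def solution_density(tightness):
    # Equation (2): rho = product over constraints of (1 - p_c)
    rho = 1.0
    for p_c in tightness.values():
        rho *= 1.0 - p_c
    return rho

def expected_solutions(domains, tightness):
    # Equation (3): E(N) = 2^N * rho
    return 2.0 ** size_N(domains) * solution_density(tightness)
```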
The fourth and final measure of constrainedness, $\kappa$, is again derived from the size and solution density. It has recently been suggested as a general measure of the "constrainedness" of combinatorial problems [9]. It is motivated by the randomness with which we can set a bit in a solution to a combinatorial problem. If $\kappa$ is small, then problems typically have many solutions and a given bit can be set more or less at random. If $\kappa$ is large, then problems typically have few or no solutions and a given bit is very constrained in how it can be set. $\kappa$ is defined by

    $\kappa =_{def} 1 - \frac{\log_2(E(N))}{N}$    (4)

    $= \frac{-\log_2(\rho)}{N} = \frac{\sum_{c \in C} -\log_2(1 - p_c)}{\sum_{v \in V} \log_2(m_v)}$    (5)

If $\kappa \ll 1$ then problems have a large expected number of solutions for their size. They are therefore likely to be under-constrained and soluble. If $\kappa \gg 1$ then problems have a small expected number of solutions for their size. They are therefore likely to be over-constrained and insoluble. A phase transition in solubility occurs in between, where $\kappa \approx 1$ [9]. This is equivalent for CSPs to the prediction made in [18] that a phase transition occurs when $E(N) \approx 1$.

4 Heuristics for Constrainedness

Many heuristics in CSPs and other combinatorial problems branch on what can often be seen as an estimate of the most constrained variable [9]. Here we describe two well known and successful heuristics for CSPs and three new heuristics, using the four measures of constrainedness described in the previous section. Recall that these measures were defined for an ensemble of problems. Each of the measures can also be computed for an individual problem, but will give only an estimate of the constrainedness of that problem. For example, an insoluble problem has zero solution density, and this may be very different from the measured value of $\rho$. Nevertheless such measures can provide both a good indication of the probability of a solution existing and, as we show here, a heuristic estimate of the most constrained variable.

4.1 Heuristic FF

Haralick and Elliott [12] proposed the fail-first principle for CSP heuristics as follows: "To succeed, try first where you are most likely to fail." The principle suggests that we attempt next the task which is most likely to fail, so that we encounter dead-ends early on and prune the search space. Applied as a constraint ordering heuristic this suggests that we check first the constraints that are most likely to fail; applied as a variable ordering heuristic, that we choose the most constrained variable. An estimate for the most constrained variable is the variable with the smallest domain: if a variable has a large domain then it should be easier to find a consistent value than for a variable with a small domain. Consequently, we choose the variable with the smallest domain. We call this the FF heuristic. With forward checking algorithms this is a dynamic variable ordering heuristic, as domain sizes vary during search. By picking the variable with the smallest domain, we branch into the subproblem with the maximum size, $N$.
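A sketch of FF as a selection function over the future (uninstantiated) variables; the lowest-index tie-break anticipates the implementation notes in Section 4.6:

```python
def select_ff(csp, future):
    # FF: branch on the future variable with the smallest current domain,
    # tie-breaking on the lowest-indexed (here, lexicographically first)
    # variable; min() keeps the first minimum in sorted order.
    return min(sorted(future), key=lambda v: len(csp.domains[v]))
```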
4.2 Heuristic Bz

The Brelaz heuristic (Bz) [1] comes from graph colouring, where we wish to find a colouring of the vertices in a graph such that adjacent vertices have different colours. Given a partial colouring of a graph, the saturation of a vertex is the number of differently coloured vertices adjacent to it. Consequently, if a vertex has high saturation it will tend to have fewer colours available to it. The Bz heuristic first colours the vertex of maximum degree. Thereafter Bz selects an uncoloured vertex of maximum saturation, and tie-breaks on the degree in the uncoloured subgraph. Bz thus chooses to colour next what is estimated to be the most constrained vertex.

Bz has much in common with FF. By choosing the vertex of maximum saturation, Bz chooses the vertex with fewest colours available to it; that is, Bz chooses a variable with smallest domain size. However, Bz also exploits topological information, namely the number of future variables adjacent to a variable. As topological information becomes redundant, Bz behaves like FF. For example, as the average degree of variables within the constraint graph increases, Bz should tend to behave like FF.

4.3 Heuristic Rho

The Rho heuristic branches into the subproblem that maximizes the solution density $\rho$. The intuition is to branch into the subproblem where the greatest fraction of states are expected to be solutions, that is, the subproblem with largest solution density. Let $C_i$ be the set of constraints incident on the variable $i$. Then to maximize $\rho$, we branch on the variable that maximizes

    $\prod_{c \in C - C_i} (1 - p_c)$

That is, the variable that minimizes $\prod_{c \in C_i} (1 - p_c)$. This is the variable with the most and/or tightest constraints. Again, we branch on an estimate of the most constrained variable. When we have uniform constraint tightness, the Rho heuristic corresponds to a maximum degree ordering [3], an approximation of the minimum width ordering.

4.4 Heuristic E(N)

The E(N) heuristic branches into the subproblem that maximizes the expected number of solutions, $E(N)$. This will tend to maximize both the subproblem size (the FF heuristic) and its solution density (the Rho heuristic). Let $N$ be the number of solutions to the current subproblem (overloading $N$: here it denotes a solution count, not the size measure of equation (1)). At the root of the tree, $N$ is the total number of solutions to the problem. If $N = 0$, the current subproblem has no solutions, and the algorithm will at some point backtrack. If $N = 1$, the current subproblem has exactly one solution; $N$ will remain constant on the path leading to this solution and drop to zero on every other path. As we move down the search tree, $N$ decreases as we instantiate variables and cut out some of the solutions to the problem. Consider a loosely constrained variable $i$ that can take any value in its domain. Branching on this variable will reduce $N$ to $N/m_i$. Hence loosely constrained variables tend to reduce $N$. Unfortunately, it is difficult to know $N$ at each node in the search tree. Instead we use $E(N)$ as an estimate for $N$. To maximize the likelihood that $N$ is non-zero beneath us, we branch into the subproblem that maximizes $E(N)$; that is, we branch on the variable that reduces $E(N)$ the least. And this is again an estimate for the most constrained variable. If all variables have the same constraint tightness, this simplifies to maximizing the size $N$ (the FF heuristic). If all variables have the same domain size (which is unlikely in forward checking algorithms except at the start of search), this simplifies to maximizing $\rho$ (the Rho heuristic). In general, the E(N) heuristic prefers those variables involved in many constraints, and those variables with tight constraints.
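Under the simplifying reading that branching on variable $i$ removes its domain factor $m_i$ and its incident constraints $C_i$ from the subproblem, Rho and E(N) reduce to the following selection rules. This is a sketch under that assumption; the `tightness` map and helper names are ours.

```python
def incident_looseness(v, tightness):
    # Product over the constraints incident on v of (1 - p_c);
    # 'tightness' maps a constraint, keyed by its pair of variables,
    # to its current tightness p_c.
    prod = 1.0
    for (a, b), p_c in tightness.items():
        if v in (a, b):
            prod *= 1.0 - p_c
    return prod

def select_rho(csp, future, tightness):
    # Rho: minimise prod over C_i of (1 - p_c), maximising the
    # solution density of the remaining subproblem.
    return min(future, key=lambda v: incident_looseness(v, tightness))

def select_en(csp, future, tightness):
    # E(N): minimise m_i * prod over C_i of (1 - p_c), i.e. branch on
    # the variable whose removal reduces E(N) the least.
    return min(future,
               key=lambda v: len(csp.domains[v]) * incident_looseness(v, tightness))
```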
4.5 Heuristic Kappa

The Kappa heuristic branches into the subproblem that minimizes $\kappa$. This heuristic was first suggested in [9] but has not yet been tested extensively on a range of CSPs. It depends on the proposal in [9] that $\kappa$ captures a notion of the constrainedness of an ensemble of problems. We assume that $\kappa$ provides an estimate for the constrainedness of an individual in that ensemble. Problems with $\kappa < 1$ are likely to be under-constrained and soluble. Problems with $\kappa > 1$ are likely to be over-constrained and insoluble. We again want to branch on a variable that is estimated to be the most constrained, giving the least constrained subproblem. We estimate this by the subproblem with smallest $\kappa$, which suggests the heuristic of minimizing $\kappa$. Like the E(N) heuristic, the Kappa heuristic simplifies to maximizing the size $N$ (the FF heuristic) if all variables have the same constraint tightness, and to maximizing $\rho$ (the Rho heuristic) if all variables have the same domain size.

4.6 Implementing the heuristics

As stated earlier, all the above heuristics are used with the algorithm FC-CBJ, i.e. a forward checking algorithm. After the current variable has successfully been assigned a value (i.e. after domain filtering all future variables have non-empty domains), the constraint tightness is recomputed for any constraint acting between a pair of variables $v_j$ and $v_k$ such that values have just been removed from the domain of $v_j$ or $v_k$, or both. To compute the constraint tightness $p_c$ for constraint $c$ acting between variables $v_j$ and $v_k$, we count the number of conflicting pairs across that constraint and divide by the product of the new domain sizes. This counting may be done via consistency checking and will take $m_j \times m_k$ checks. Constraint tightness will then be in the range 0 (all pairs compatible) to 1 (all pairs are conflicts). When computing the sum of the log looseness of constraints (i.e. the numerator of equation (5)), if $p_c = 1$ a value of infinity is returned. Consequently, the Kappa heuristic will select variable $v_j$ or $v_k$ next, and the instantiation will result in a dead end.

In the FF heuristic the first variable selected is the variable with smallest domain size, and when all variables have the same domain size we select first the lowest indexed variable $v_1$. For the Bz heuristic, saturation is measured as the inverse of the domain size, i.e. the variable with smallest domain size will have largest saturation. Consequently, when the constraint graph is a clique, FF and Bz will have identical behaviours.

It should be noted that search costs in this paper do not include the cost, in terms of consistency checks, of re-computing the constraint tightness. This overhead makes some of the heuristics much less competitive than the analysis might suggest. However, our main concern here is to establish sound and general principles for selecting variable ordering heuristics. In the future, we hope to develop smart book-keeping techniques and approximations to the heuristics that reduce the cost of re-computing or estimating the constraint tightness but which still give good performance.
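The tightness recomputation just described, and the infinite log-looseness case, might look as follows; this is a sketch over the illustrative `BinaryCSP` above, not the authors' implementation.

```python
import math

def recompute_tightness(csp, vj, vk):
    # Count the conflicting pairs still possible across the constraint
    # between vj and vk, divided by the product of the current domain
    # sizes; as noted above, this costs m_j * m_k consistency checks.
    conflicts = sum(1 for x in csp.domains[vj]
                      for y in csp.domains[vk]
                      if not csp.consistent(vj, x, vk, y))
    return conflicts / (len(csp.domains[vj]) * len(csp.domains[vk]))

def log_looseness(p_c):
    # One term of the numerator of equation (5); p_c = 1 yields infinity,
    # steering the Kappa heuristic straight to the doomed variable.
    return float('inf') if p_c == 1.0 else -math.log2(1.0 - p_c)
```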
5 The Experiments

The experiments reported below attempt to identify under what conditions one heuristic is better than another. Initially, experiments are performed over uniform randomly generated CSPs. That is, in a problem $\langle n, m, p_1, p_2 \rangle$ there will be $n$ variables, each with a uniform domain of size $m$; there will be $p_1 \cdot n(n-1)/2$ constraints, and exactly $p_2 m^2$ conflicts over each constraint [18]. This class of problem is then modified such that we investigate behaviour over problems with non-uniform domains and non-uniform constraint tightness.

When presenting the results (graphically) we adopt the convention of always measuring the problems in terms of their constrainedness, $\kappa$. We do this because in some experiments we vary (for example) the number of variables and keep the degree of variables constant (and we might have $n$ on the x-axis), vary the tightness of constraints (and we might have $p_2$ on the x-axis), and so on. Therefore by using constrainedness we hope to get a clear picture of what happens. Furthermore, when we have non-uniform problems, constrainedness appears to be one of the few measures that we can use. Problems with small values of $\kappa$ are soluble and easy, problems with large $\kappa$ are insoluble and easy, and problems with $\kappa \approx 1$ tend to be in the phase transition, may be soluble or insoluble, and are typically hard.

In all of the graphs we have attempted to retain the same line style for each of the heuristics. However, the labels in the graphs have been ordered, from top to bottom, such that they correspond with the ranking of the heuristics in the phase transition. Therefore, if FF performs worse than any other heuristic in a set of experiments its label will appear first.

5.1 Uniform problems, varying constraint graph density $p_1$

The aim of this experiment is to determine how the heuristics are affected as we vary the number of constraints within the constraint graph. The experiments were performed over problems with 20 variables, each with a domain size of 10 (similar problems have been used in several previous studies [15, 18, 8]). In Figure 1, we plot the mean performance for sparse constraint graphs with $p_1 = 0.2$, dense constraint graphs with $p_1 = 1.0$, and constraint graphs of intermediate density with $p_1 = 0.5$. (Disconnected graphs in the sparse problems were not filtered out, since they had little effect on performance.) At each density, we generated 1,000 problems at each possible value of $p_2$ from 0.01 to 0.99 in steps of 0.01.

[Figure 1: Mean performance of heuristics for $\langle 20, 10 \rangle$ (mean checks against $\kappa$). Panels: (a) $p_1 = 0.2$; (b) $p_1 = 0.5$; (c) $p_1 = 1.0$; (d) $p_1 = 0.2$, also with static order.]

For sparse constraint graphs (see Figure 1(a)), Bz performs best, whilst E(N) and Kappa are not far behind. Rho is significantly worse and FF even more so. Analysing the distribution in performance (graphs are not shown), e.g. the median, 95% and higher percentiles, we observed a similar ranking of the heuristics, with the differences between the heuristics opening up in the higher percentiles in the middle of the phase transition. As problems become more dense (see Figure 1(b)), Kappa dominates E(N). We conjecture that this is a result of the ensemble average for the number of solutions providing a better estimate for the actual number of solutions within a given subproblem as the graph density increases. Rho and FF continue to perform poorly, although FF does manage to overtake Rho. For complete graphs with $p_1 = 1.0$ (see Figure 1(c)), Bz and FF are identical, as expected (in Figure 1(c) the contour for FF overwrites the Bz contour). For uniform and sparse problems, Bz seems to be the heuristic of choice, whilst for uniform and dense problems, Kappa or E(N) would seem to be best.

For comparison with the dynamic variable ordering heuristics, in Figure 1(d) we also plot the mean performance of FC-CBJ with a static variable ordering: variables were considered in lexicographic order. Performance is much worse with a static ordering than with any of the dynamic ordering heuristics, even on the relatively easy sparse constraint graphs.
The secondary peaks for the static variable ordering at low $\kappa$ occur as a result of ehps [19], occasional "exceptionally hard" problems that arise following poor branching decisions early in search [7]. The worst case outside the phase transition was more than 14 million checks at $\kappa = 0.46$, in a region where 100% of problems were soluble. This was 5 orders of magnitude worse than the median of 288 checks at this point.

5.2 Uniform problems, varying number of variables $n$

[Figure 2: Mean performance of FC-CBJ with each heuristic for $\langle n, 10 \rangle$ with average degree 5 (mean checks against $\kappa$). Panels: (a) $n = 30$; (b) $n = 50$; (c) peaks of mean performance over $n$ (peak mean checks against number of variables).]

The aim of this experiment is to determine how the heuristics scale with problem size. At first sight, this can be done simply by increasing the number of variables $n$ while keeping all else constant. However, if $n$ increases while $p_1$ is kept constant, the degree of a variable (i.e. the number of constraints incident on a variable) also increases. To avoid this, we vary $p_1$ with $n$ such that the average degree remains constant at 5. To observe a phase transition, 1,000 problems were then generated at each possible value of $p_2$ from 0.01 to 0.99 in steps of 0.01.

In Figure 2, we plot the performance of each heuristic as we increase $n$. To make comparison between the graphs easier, in Figures 2(a) and (b) we again plot mean performance against the constrainedness $\kappa$ of the randomly generated problems. The ranking of the heuristics remains the same as in the previous experiment for constraint graphs of intermediate density. We observed similar behaviour in the distribution of performance (e.g. the median, 95% and higher percentiles; graphs are not shown). As before, the differences between the heuristics tend to open up in the higher percentiles in the middle of the phase transition.

In Figure 2(c) we plot the peak in average search effort in the phase transition region for each value of $n$. This gives a contour showing how search cost increases with $n$ for this class of problem. The figure suggests that Bz, Kappa and E(N) scale in a similar manner. There is insufficient data to know if the slight narrowing of the gap between Bz and both Kappa and E(N) as $n$ increases is significant, and if Kappa and E(N) eventually overtake Bz. However, Rho and FF appear to scale less well: the gradients of Figure 2(c) suggest that FF and Rho scale with larger exponents than Bz, Kappa and E(N).

5.3 Problems with non-uniform constraint tightness

All experiments considered above have constraints generated uniformly. That is, a single value of $p_2$ describes the tightness of every constraint. At the start of search, every constraint is equally tight, so a good measure of the constrainedness of a variable is simply the number of constraints involving this variable (i.e. the variable's degree), together with its domain size. Even as we progress through search and tightnesses vary, this measure should still be reasonably accurate. This might explain why Bz has never been significantly worse in earlier experiments than Kappa or E(N), which undertake the computationally heavy overhead of measuring exact constraint tightnesses.
However, if we are given a problem with significantly varying constraint tightnesses, we must take account of this to measure constrainedness accurately. We therefore expect that Bz and FF may perform poorly on problems with varying constraint tightnesses, while the other heuristics introduced in this paper should perform well, because they do take account of constraint tightness. To test this hypothesis, we generated problems with mainly loose constraints but a small number of very tight constraints. We did this by generating problems with a multiple of 5 constraints, and choosing exactly 20% of these constraints to have tightness $p_2 = 0.8$ (i.e. tight constraints) and the remainder tightness $p_2 = 0.2$ (i.e. loose constraints). We expect Bz to perform poorly on these problems as it will tie-break on the number of constraints and not the tightness of those constraints (the more significant factor in this problem class). We used $\langle 30, 10 \rangle$ problems, and to observe a phase transition we varied the constraint graph density $p_1$ from 1/87 to 1 in steps of 1/87. Results are plotted in Figure 3. The 50% solubility point is at $\kappa = 0.64$, when $p_1 = 23/87$. Median performance, Figure 3(a), shows that as predicted Kappa and